We have witnessed enormous growth of the supervised learning paradigm over the past decade. Supervised learning requires large amounts of labeled data to reach state-of-the-art performance, but labeling samples requires extensive human annotation. To avoid the cost of labeling data, self-supervised methods have been proposed to exploit the large volumes of available unlabeled data. This study presents a comprehensive and insightful survey and analysis of recent developments in the self-supervised paradigm for feature representation. In this paper, we investigate the factors affecting the usefulness of self-supervision under different settings. We present key insights into the two distinct families of self-supervised methods, generative and contrastive. We also examine the limitations of supervised adversarial training and how self-supervision can help overcome them. We then discuss the limitations and challenges of effectively using self-supervision for visual tasks. Finally, we highlight some open problems and point out future research directions.
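As a concrete illustration of the contrastive family mentioned above, here is a minimal PyTorch sketch of an InfoNCE-style contrastive loss; it is a generic example rather than any specific method covered by the survey, and the batch size, embedding dimension, and temperature are illustrative choices.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss.

    z1, z2: (batch, dim) embeddings of two augmented views of the same images.
    Each row of z1 is pulled toward the matching row of z2 and pushed away
    from every other row in the batch (the negatives).
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (batch, batch) cosine similarities
    targets = torch.arange(z1.size(0))      # the positive pair sits on the diagonal
    return F.cross_entropy(logits, targets)

# toy usage with random "embeddings"
loss = info_nce_loss(torch.randn(8, 128), torch.randn(8, 128))
```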
High dynamic range (HDR) video provides a far more visually appealing experience than standard low dynamic range (LDR) video. Despite significant progress in HDR imaging, capturing high-quality HDR video with a conventional off-the-shelf camera remains a challenging task. Existing methods rely entirely on dense optical flow between adjacent LDR frames to reconstruct HDR frames; however, when applied to noisy frames with alternating exposures, they produce color artifacts and exposure inconsistencies. In this paper, we propose an end-to-end GAN framework for HDR video reconstruction from LDR sequences with alternating exposures. We first extract clean LDR frames from the noisy alternating-exposure LDR video using a denoising network trained in a self-supervised setting. We then align the adjacent alternating-exposure frames to a reference frame and reconstruct a high-quality HDR frame in a fully adversarial setting. To further improve the robustness and quality of the generated frames, we incorporate a temporal-stability regularization term, together with content and style losses, into the cost function during training. Experimental results show that our framework achieves state-of-the-art performance and generates higher-quality HDR video frames than existing methods.
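To make the composite objective concrete, the following is a schematic PyTorch sketch of how adversarial, content, style, and temporal-stability terms might be combined; the loss weights and the particular distance functions are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gram(feats):
    """Channel-correlation (style) matrix of a feature map (batch, C, H, W)."""
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def generator_loss(fake_logits, hdr_pred, hdr_prev_pred, feats_pred, feats_ref,
                   w_adv=1.0, w_content=1.0, w_style=10.0, w_temporal=0.1):
    """Schematic composite generator objective; all weights are illustrative."""
    # adversarial term: push the discriminator toward scoring generated frames as real
    adv = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    # content term: match deep features of the generated and reference frames
    content = F.l1_loss(feats_pred, feats_ref)
    # style term: match channel correlations (Gram matrices) of the same features
    style = F.l1_loss(gram(feats_pred), gram(feats_ref))
    # temporal-stability regularizer: discourage flicker between consecutive HDR outputs
    temporal = F.l1_loss(hdr_pred, hdr_prev_pred)
    return w_adv * adv + w_content * content + w_style * style + w_temporal * temporal
```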
Tables present important information concisely in many scientific documents. Visual features such as mathematical symbols, equations, and spanning cells make structure and content extraction from tables embedded in research documents difficult. This paper discusses the dataset, tasks, participants' methods, and results of the ICDAR 2021 Competition on Scientific Table Image Recognition. Specifically, the task of the competition is to convert a table image into its corresponding LaTeX source code. We proposed two subtasks. In Subtask 1, we ask participants to reconstruct the LaTeX structure code from an image. In Subtask 2, we ask participants to reconstruct the LaTeX content code from an image. This report describes the datasets and ground-truth specification, details the performance evaluation metrics used, presents the final results, and summarizes the participating methods. The vcgroup submission obtained the highest exact-match accuracy scores on both subtasks (55% on Subtask 2), beating the previous baselines by 5% and 12%, respectively. Although there is still room to improve the recognition capability of the models, this competition contributes to the development of fully automated table recognition systems by challenging practitioners to solve problems under specific constraints and to share their approaches. The platform will remain open for post-challenge submissions at https://competitions.codalab.org/competitions/26979.
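To make the two subtasks concrete, a hypothetical target for a small two-column table might be split as below; the placeholder tokens and the exact structure/content split are illustrative only and do not reproduce the competition's ground-truth format.

```latex
% Subtask 1 -- structure code: layout tokens only, cell contents abstracted away
\begin{tabular}{lc}
\hline
CELL & CELL \\
CELL & CELL \\
\hline
\end{tabular}

% Subtask 2 -- content code: the tokens that fill the cells, row by row
Method & $F_1$ \\
Baseline & 0.62 \\
```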
Classically, the development of humanoid robots has been sequential and iterative. Such bottom-up design procedures rely heavily on intuition and are often biased by the designer's experience. Exploiting the non-linear coupled design space of robots is non-trivial and requires a systematic procedure for exploration. We adopt the top-down design strategy, the V-model, used in automotive and aerospace industries. Our co-design approach identifies non-intuitive designs from within the design space and obtains the maximum permissible range of the design variables as a solution space, to physically realise the obtained design. We show that by constructing the solution space, one can (1) decompose higher-level requirements onto sub-system-level requirements with tolerance, alleviating the "chicken-or-egg" problem during the design process, (2) decouple the robot's morphology from its controller, enabling greater design flexibility, (3) obtain independent sub-system level requirements, reducing the development time by parallelising the development process.
Data compression is becoming critical for storing scientific data because many scientific applications need to store large amounts of data and post process this data for scientific discovery. Unlike image and video compression algorithms that limit errors to primary data, scientists require compression techniques that accurately preserve derived quantities of interest (QoIs). This paper presents a physics-informed compression technique implemented as an end-to-end, scalable, GPU-based pipeline for data compression that addresses this requirement. Our hybrid compression technique combines machine learning techniques and standard compression methods. Specifically, we combine an autoencoder, an error-bounded lossy compressor to provide guarantees on raw data error, and a constraint satisfaction post-processing step to preserve the QoIs within a minimal error (generally less than floating point error). The effectiveness of the data compression pipeline is demonstrated by compressing nuclear fusion simulation data generated by a large-scale fusion code, XGC, which produces hundreds of terabytes of data in a single day. Our approach works within the ADIOS framework and results in compression by a factor of more than 150 while requiring only a few percent of the computational resources necessary for generating the data, making the overall approach highly effective for practical scenarios.
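As a highly simplified illustration of the hybrid idea, the numpy sketch below pairs a stand-in error-bounded lossy compressor (uniform quantization) with a post-processing correction that pulls a derived quantity of interest, here simply the mean, back to its original value; the quantizer, the QoI, and the tolerances are placeholders rather than the paper's autoencoder, compressor, or XGC quantities.

```python
import numpy as np

def lossy_compress(x, abs_err=1e-3):
    """Stand-in error-bounded lossy compressor: uniform quantization with a
    guaranteed absolute error of at most abs_err on the raw data."""
    codes = np.round(x / (2 * abs_err)).astype(np.int64)  # would be entropy-coded in practice
    return codes, 2 * abs_err

def decompress(codes, step):
    return codes * step

def preserve_qoi(x_rec, target_mean):
    """Post-processing step: shift the reconstruction so that a derived
    quantity of interest (here, the mean) matches the original value."""
    return x_rec + (target_mean - x_rec.mean())

# toy end-to-end use
x = np.random.rand(1_000_000)
codes, step = lossy_compress(x, abs_err=1e-3)
x_rec = preserve_qoi(decompress(codes, step), target_mean=x.mean())
assert np.max(np.abs(x - x_rec)) <= 2e-3        # raw-data bound plus the small QoI shift
print(abs(x.mean() - x_rec.mean()))             # QoI error ~ floating-point precision
```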
We consider the problem of continually releasing an estimate of the population mean of a stream of samples that is user-level differentially private (DP). At each time instant, a user contributes a sample, and the users can arrive in arbitrary order. Until now, these requirements of continual release and user-level privacy were considered in isolation. But, in practice, both these requirements come together as the users often contribute data repeatedly and multiple queries are made. We provide an algorithm that outputs a mean estimate at every time instant $t$ such that the overall release is user-level $\varepsilon$-DP and has the following error guarantee: Denoting by $M_t$ the maximum number of samples contributed by a user, as long as $\tilde{\Omega}(1/\varepsilon)$ users have $M_t/2$ samples each, the error at time $t$ is $\tilde{O}(1/\sqrt{t}+\sqrt{M_t}/t\varepsilon)$. This is a universal error guarantee which is valid for all arrival patterns of the users. Furthermore, it (almost) matches the existing lower bounds for the single-release setting at all time instants when users have contributed an equal number of samples.
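For intuition about what user-level sensitivity means here, below is a minimal numpy sketch of a single-release user-level $\varepsilon$-DP mean that bounds each user's influence by capping their number of samples and adds Laplace noise; it is not the paper's continual-release algorithm, and treating the kept sample count as public is an illustrative simplification.

```python
import numpy as np

def user_level_dp_mean(samples_by_user, eps, cap, lo=0.0, hi=1.0):
    """Single-release user-level eps-DP mean of samples clipped to [lo, hi].

    samples_by_user: list of 1-D arrays, one per user.
    cap: maximum number of samples kept per user (bounds each user's influence).
    """
    kept = [np.clip(np.asarray(s)[:cap], lo, hi) for s in samples_by_user]
    total = sum(s.sum() for s in kept)
    n = sum(len(s) for s in kept)
    # Changing one user's samples moves the sum by at most cap * (hi - lo).
    # Treating n as public (a simplification), the mean's sensitivity is:
    sensitivity = cap * (hi - lo) / n
    return total / n + np.random.laplace(scale=sensitivity / eps)

# toy usage: 100 users, each contributing up to 20 samples in [0, 1]
rng = np.random.default_rng(0)
data = [rng.random(rng.integers(1, 21)) for _ in range(100)]
print(user_level_dp_mean(data, eps=1.0, cap=20))
```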
Nonnegative matrix factorization can be used to automatically detect topics within a corpus in an unsupervised fashion. The technique amounts to an approximation of a nonnegative matrix as the product of two nonnegative matrices of lower rank. In this paper, we show this factorization can be combined with regression on a continuous response variable. In practice, the method performs better than regression done after topics are identified, and it retains interpretability.
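One natural way to write such a joint objective (our notation, not necessarily the paper's exact formulation) is

\[
\min_{W \ge 0,\; H \ge 0,\; \beta}\;\; \|X - WH\|_F^2 \;+\; \lambda\,\|y - W\beta\|_2^2 ,
\]

where $X$ is the nonnegative document-term matrix, $W$ holds per-document topic weights, $H$ holds the topics, $y$ is the continuous response, and $\lambda$ trades off reconstruction against prediction; setting $\lambda = 0$ recovers plain NMF followed by a separate regression on the fixed topic weights.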
Despite the remarkable success achieved by graph convolutional networks for functional brain activity analysis, the heterogeneity of functional patterns and the scarcity of imaging data still pose challenges in many tasks. Transferring knowledge from a source domain with abundant training data to a target domain is effective for improving representation learning on scarce training data. However, traditional transfer learning methods often fail to generalize the pre-trained knowledge to the target task due to domain discrepancy. Self-supervised learning on graphs can increase the generalizability of graph features since self-supervision concentrates on inherent graph properties that are not limited to a particular supervised task. We propose a novel knowledge transfer strategy by integrating meta-learning with self-supervised learning to deal with the heterogeneity and scarcity of fMRI data. Specifically, we perform a self-supervised task on the source domain and apply meta-learning, which strongly improves the generalizability of the model using the bi-level optimization, to transfer the self-supervised knowledge to the target domain. Through experiments on a neurological disorder classification task, we demonstrate that the proposed strategy significantly improves target task performance by increasing the generalizability and transferability of graph-based knowledge.
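As a schematic illustration of the bi-level transfer step, the PyTorch sketch below performs a first-order MAML-style update: an inner loop adapts a copy of the model to a self-supervised loss on source-domain data, and an outer step applies the adapted model's gradient on held-out data to the meta-parameters. The first-order approximation, the generic loss handle, and the hyperparameters are simplifying assumptions rather than the paper's implementation.

```python
import copy
import torch

def maml_step(model, outer_opt, ssl_loss_fn, support_batch, query_batch,
              inner_lr=0.01, inner_steps=1):
    """One first-order MAML-style bi-level update (illustrative sketch).

    Inner loop: adapt a copy of the model to the self-supervised task on
    `support_batch`.  Outer step: apply the adapted model's gradient on
    `query_batch` to the original (meta) parameters via `outer_opt`.
    """
    adapted = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        inner_opt.zero_grad()
        ssl_loss_fn(adapted, support_batch).backward()
        inner_opt.step()

    outer_opt.zero_grad()
    outer_loss = ssl_loss_fn(adapted, query_batch)
    grads = torch.autograd.grad(outer_loss, list(adapted.parameters()))
    for p, g in zip(model.parameters(), grads):
        p.grad = g               # first-order approximation of the meta-gradient
    outer_opt.step()
    return outer_loss.item()
```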
One of the major errors affecting GNSS signals in urban canyons is GNSS multipath error. In this work, we develop a Gazebo plugin which utilizes a ray tracing technique to account for multipath effects in a virtual urban canyon environment using virtual satellites. This software plugin balances accuracy and computational complexity to run the simulation in real-time for both software-in-the-loop (SITL) and hardware-in-the-loop (HITL) testing. We also construct a 3D virtual environment of Hong Kong and compare the results from our plugin with the GNSS data in the publicly available Urban-Nav dataset, to validate the efficacy of the proposed Gazebo plugin. The plugin is openly available to researchers in the robotics community at https://github.com/kpant14/multipath_sim.
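For intuition about the quantity a ray-tracing multipath model must produce, the small numpy sketch below computes the excess path length of a signal reaching the receiver via one specular reflection off a planar wall, using the mirror-image construction; the flat vertical wall and the single-bounce assumption are illustrative simplifications, not the plugin's implementation.

```python
import numpy as np

def reflected_excess_path(sat_pos, rx_pos, wall_point, wall_normal):
    """Extra path length of a single specular reflection off a planar wall.

    Mirror-image method: reflect the receiver across the wall plane; the
    reflected-ray length then equals the straight-line distance from the
    satellite to that image.  All positions are 3-D numpy arrays (metres).
    """
    n = wall_normal / np.linalg.norm(wall_normal)
    # mirror the receiver across the plane through wall_point with normal n
    rx_image = rx_pos - 2.0 * np.dot(rx_pos - wall_point, n) * n
    direct = np.linalg.norm(sat_pos - rx_pos)
    reflected = np.linalg.norm(sat_pos - rx_image)
    return reflected - direct    # metres of extra delay carried by the multipath ray

# toy example: satellite high to the east, receiver 10 m in front of a west wall
excess = reflected_excess_path(
    sat_pos=np.array([20_000_000.0, 0.0, 15_000_000.0]),
    rx_pos=np.array([0.0, 0.0, 2.0]),
    wall_point=np.array([-10.0, 0.0, 0.0]),
    wall_normal=np.array([1.0, 0.0, 0.0]),
)
print(f"excess path: {excess:.2f} m")
```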
In this work, we used a semi-supervised learning method to train a deep learning model that can segment brain MRI images. The semi-supervised model uses less labeled data, and its performance is competitive with that of a supervised model trained on fully labeled data. This framework could reduce the cost of labeling MRI images. We also introduced a robust loss to reduce the effect of the noisy, inaccurate labels generated during semi-supervised learning.
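One common family of noise-robust losses that could play this role is the generalized cross-entropy, which interpolates between standard cross-entropy and a mean-absolute-error-like loss; the PyTorch sketch below shows it for voxel-wise classification and is an illustration of the idea, not necessarily the loss used in this work.

```python
import torch

def generalized_cross_entropy(logits, targets, q=0.7):
    """Noise-robust classification loss  (1 - p_y^q) / q  averaged over voxels.

    As q -> 0 this approaches standard cross-entropy; at q = 1 it behaves like
    a mean-absolute-error loss, which is less sensitive to mislabeled voxels.
    logits: (N, C) per-voxel class scores; targets: (N,) integer labels.
    """
    probs = torch.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp_min(1e-7)
    return ((1.0 - p_y.pow(q)) / q).mean()

# toy usage: 10 voxels, 4 tissue classes, possibly noisy pseudo-labels
loss = generalized_cross_entropy(torch.randn(10, 4), torch.randint(0, 4, (10,)))
```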